Communication under Strong Asynchronism
We consider asynchronous communication over point-to-point discrete
memoryless channels. The transmitter starts sending one block codeword at an
instant that is uniformly distributed within a certain time period, which
represents the level of asynchronism. The receiver, by means of a sequential
decoder, must isolate the message without knowing when the codeword
transmission starts but being cognizant of the asynchronism level A. We are
interested in how quickly the receiver can isolate the sent message,
particularly in the regime where A is exponentially larger than the codeword
length N, which we refer to as `strong asynchronism.'
This model of sparse communication may represent the situation of a sensor
that remains idle most of the time and, only occasionally, transmits
information to a remote base station which needs to quickly take action.
The first result shows that vanishing error probability can be guaranteed as
N tends to infinity while A grows as exp(N*k) if and only if k does not exceed
the `synchronization threshold,' a constant that admits a simple closed form
expression, and is at least as large as the capacity of the synchronized
channel. The second result is the characterization of a set of achievable
strictly positive rates in the regime where A is exponential in N, and where
the rate is defined with respect to the expected delay between the time
information starts being emitted and the time the receiver makes a decision.
As an application of the first result we consider antipodal signaling over a
Gaussian channel and derive a simple necessary condition between A, N, and SNR
for achieving reliable communication.
Comment: 26 pages
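To get a feel for how sparse this regime is, a small numerical sketch (illustrative only; the values of k and N below are arbitrary choices, not from the paper) computes the fraction of the asynchronism window actually occupied by the codeword, N/A = N*exp(-kN):

```python
import math

def occupancy_fraction(N, k):
    """Fraction of the asynchronism window A = exp(k*N) occupied
    by an N-symbol codeword; it shrinks exponentially in N."""
    A = math.exp(k * N)
    return N / A

# For a fixed asynchronism exponent k, the channel carries the codeword
# an exponentially vanishing fraction of the time as N grows.
fractions = [occupancy_fraction(N, k=0.5) for N in (10, 20, 40)]
print(fractions)
```

This is the sense in which the communication is "sparse": a receiver that is not synchronized must sift through an exponentially long stretch of noise-only outputs.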
Optimal Sequential Frame Synchronization
We consider the `one-shot frame synchronization problem' where a decoder
wants to locate a sync pattern at the output of a channel on the basis of
sequential observations. We assume that the sync pattern of length N starts
being emitted at a random time within some interval of size A, which
characterizes the asynchronism level between the transmitter and the receiver.
We show that a sequential decoder can optimally locate the sync pattern, i.e.,
exactly, without delay, and with probability approaching one as N tends to
infinity, if and only if the asynchronism level grows as O(exp(N*k)), with k
below the `synchronization threshold,' a constant that admits a simple
expression depending on the channel. This constant is the same as the one that
characterizes the limit for reliable asynchronous communication, as was
recently reported by the authors. If k exceeds the synchronization threshold,
any decoder, sequential or non-sequential, locates the sync pattern with an
error that tends to one as N tends to infinity. Hence, a sequential decoder can
locate a sync pattern as well as the (non-sequential) maximum likelihood
decoder that operates on the basis of output sequences of maximum length A+N-1,
but with far fewer observations.
Comment: 6 pages
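A minimal sketch of the sequential idea (a toy, not the paper's decoder): the decoder slides over the output stream and stops at the first window matching the sync pattern. The assumptions here — a noiseless channel, an all-zero "idle" output before emission, and an exact-match stopping rule — are simplifications for illustration.

```python
# Toy one-shot frame synchronization: scan the stream sequentially and
# stop at the first window whose Hamming distance to the pattern is 0.
# Assumes a noiseless channel and all-zero idle output (illustration only).

def locate_sync(stream, pattern):
    """Return the index where the sync pattern starts, or None."""
    N = len(pattern)
    for i in range(len(stream) - N + 1):
        # Hamming distance between the current window and the pattern.
        if sum(a != b for a, b in zip(stream[i:i + N], pattern)) == 0:
            return i
    return None

pattern = [1, 0] * 10                  # sync pattern of length N = 20
start = 50                             # random emission time within the window
stream = [0] * start + pattern + [0] * 30
print(locate_sync(stream, pattern))    # locates the true start, 50
```

The sequential decoder stops as soon as it sees the pattern, instead of buffering the full length-(A+N-1) output as a non-sequential maximum likelihood decoder would.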
Energy and Sampling Constrained Asynchronous Communication
The minimum energy, and, more generally, the minimum cost, to transmit one
bit of information has recently been derived for bursty communication when
information is available infrequently at random times at the transmitter. This
result assumes that the receiver is always in the listening mode and samples
all channel outputs until it makes a decision. If the receiver is constrained
to sample only a fraction f>0 of the channel outputs, what is the cost penalty
due to sparse output sampling?
Remarkably, there is no penalty: regardless of f>0 the asynchronous capacity
per unit cost is the same as under full sampling, i.e., when f=1. There is not
even a penalty in terms of decoding delay---the elapsed time between when
information becomes available and when it is decoded. This latter result relies
on the ability to sample adaptively: the next sample can be chosen as a
function of past samples. Under non-adaptive sampling, it is possible to
achieve the full sampling asynchronous capacity per unit cost, but the decoding
delay gets multiplied by 1/f. Therefore adaptive sampling strategies are of
particular interest in the very sparse sampling regime.
Comment: Submitted to the IEEE Transactions on Information Theory
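The 1/f delay factor for non-adaptive sampling can be seen in a toy calculation (illustrative only; this is not the paper's scheme): a non-adaptive sampler that observes every (1/f)-th channel output needs about N/f time slots to gather the N samples required to decode.

```python
# Toy illustration of the 1/f delay multiplication under non-adaptive
# sampling: the receiver observes slots 0, 1/f, 2/f, ..., so collecting
# N samples after information arrives takes about N/f slots, not N.

def decoding_delay(t0, N, f):
    """Slots elapsed from arrival time t0 until N samples are collected,
    for a non-adaptive sampler observing every (1/f)-th slot."""
    step = round(1 / f)
    first = ((t0 + step - 1) // step) * step   # first sample at or after t0
    last = first + (N - 1) * step              # time of the N-th sample
    return last - t0 + 1

print(decoding_delay(t0=0, N=10, f=1.0))    # full sampling: delay 10
print(decoding_delay(t0=0, N=10, f=0.25))   # every 4th slot: delay 37, about N/f
```

Adaptive sampling avoids this blow-up because the sampling times can be chosen as a function of what has already been observed.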
A Simple Message-Passing Algorithm for Compressed Sensing
We consider the recovery of a nonnegative vector x from measurements y = Ax,
where A is an m-by-n matrix whose entries are in {0, 1}. We establish that when
A corresponds to the adjacency matrix of a bipartite graph with sufficient
expansion, a simple message-passing algorithm produces an estimate \hat{x} of x
satisfying ||x-\hat{x}||_1 \leq O(n/k) ||x-x(k)||_1, where x(k) is the best
k-sparse approximation of x. The algorithm performs O(n (log(n/k))^2 log(k))
computation in total, and the number of measurements required is m = O(k
log(n/k)). In the special case when x is k-sparse, the algorithm recovers x
exactly in time O(n log(n/k) log(k)). Ultimately, this work is a further step
in the direction of more formally developing the broader role of
message-passing algorithms in solving compressed sensing problems.
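A simplified peeling-style sketch in the spirit of such message passing (an illustrative variant, not the paper's exact algorithm): since x is nonnegative and A is 0/1, two local rules suffice on sparse inputs — a measurement whose residual is zero forces all its unresolved neighbors to zero, and a measurement with a single unresolved neighbor determines that entry.

```python
# Simplified peeling decoder for y = A x with x >= 0 and A in {0,1}^{m x n}
# (an illustrative variant, not the exact algorithm from the paper).
# Rule 1: residual 0 => all unresolved neighbors are 0 (nonnegativity).
# Rule 2: exactly one unresolved neighbor => it equals the residual.

def peel(rows, y, n):
    """rows[i] = set of variable indices appearing in measurement i."""
    x_hat = [None] * n
    progress = True
    while progress:
        progress = False
        for i, nbrs in enumerate(rows):
            unresolved = [j for j in nbrs if x_hat[j] is None]
            if not unresolved:
                continue
            residual = y[i] - sum(x_hat[j] for j in nbrs if x_hat[j] is not None)
            if residual == 0:                # rule 1
                for j in unresolved:
                    x_hat[j] = 0
                progress = True
            elif len(unresolved) == 1:       # rule 2
                x_hat[unresolved[0]] = residual
                progress = True
    return x_hat

# k-sparse example: x = [3, 0, 5, 0], four 0/1 measurements.
rows = [{0, 1}, {2, 3}, {1, 3}, {0, 2}]
y = [3, 5, 0, 8]
print(peel(rows, y, n=4))   # recovers [3, 0, 5, 0]
```

On a bipartite graph with sufficient expansion such local rules keep making progress, which is the intuition behind the measurement and runtime bounds quoted above.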
Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication
We consider asynchronous point-to-point communication. Building on a recently
developed model, we show that training based schemes, i.e., communication
strategies that separate synchronization from information transmission, perform
suboptimally at high rate.
Comment: To appear in the proceedings of the 2009 IEEE Information Theory
Workshop (Taormina)
Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes
Motivated by distributed storage applications, we investigate the degree to
which capacity achieving encodings can be efficiently updated when a single
information bit changes, and the degree to which such encodings can be
efficiently (i.e., locally) repaired when a single encoded bit is lost.
Specifically, we first develop conditions under which optimum
error-correction and update-efficiency are possible, and establish that the
number of encoded bits that must change in response to a change in a single
information bit must scale logarithmically in the block-length of the code if
we are to achieve any nontrivial rate with vanishing probability of error over
the binary erasure or binary symmetric channels. Moreover, we show there exist
capacity-achieving codes with this scaling.
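For linear codes, the number of encoded bits that change when one information bit flips is simply the Hamming weight of the corresponding generator-matrix row. A small sketch with the systematic (7,4) Hamming code — chosen only as a familiar toy, not a capacity-approaching code — makes this concrete:

```python
# Update efficiency of a linear code: flipping information bit j changes
# the codeword by row j of the generator matrix G (arithmetic mod 2), so
# the number of encoded bits that change equals the weight of that row.
# Illustrated with the systematic (7,4) Hamming code (a toy example).

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Systematic encoding over GF(2): c = msg * G."""
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

def update_cost(j):
    """Number of encoded bits that change when information bit j flips."""
    msg = [0, 0, 0, 0]
    flipped = msg.copy()
    flipped[j] ^= 1
    return sum(a != b for a, b in zip(encode(msg), encode(flipped)))

print([update_cost(j) for j in range(4)])   # row weights of G: [3, 3, 3, 4]
```

The result quoted above says that for capacity-approaching codes this quantity cannot stay bounded: it must grow logarithmically in the block-length.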
With respect to local repairability, we develop tight upper and lower bounds
on the number of remaining encoded bits that are needed to recover a single
lost bit of the encoding. In particular, we show that if the code-rate is
ε below the capacity, then for optimal codes, the maximum number
of codeword symbols required to recover one lost symbol must scale as
log(1/ε).
Several variations on---and extensions of---these results are also developed.
Comment: Accepted to appear in JSA
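Local repairability in miniature (a toy illustration, not the paper's construction): in a single parity-check code the parity symbol is the XOR of the data symbols, so any one lost symbol can be rebuilt from the remaining n-1 symbols — its repair locality is n-1.

```python
# Toy local repair with a single parity-check code: the parity symbol is
# the XOR of the data symbols, so any one lost codeword symbol equals the
# XOR of the other n - 1 symbols (repair locality n - 1).

from functools import reduce
from operator import xor

def spc_encode(data):
    """Append a parity symbol equal to the XOR of all data symbols."""
    return data + [reduce(xor, data)]

def repair(codeword, lost):
    """Recover the symbol at index `lost` from the remaining symbols."""
    return reduce(xor, (s for i, s in enumerate(codeword) if i != lost))

cw = spc_encode([0b1010, 0b0111, 0b1100])
print(repair(cw, lost=1))   # recovers 0b0111 (prints 7)
```

The bounds in the abstract quantify how much smaller than n-1 this locality can be made without giving up rate or reliability.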
Iterative algorithms for lossy source coding
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 65-68).
This thesis explores the problems of lossy source coding and information embedding. For lossy source coding, we analyze low-density parity-check (LDPC) codes and low-density generator-matrix (LDGM) codes for quantization under a Hamming distortion. We prove that LDPC codes can achieve the rate-distortion function. We also show that the variable node degree of any LDGM code must become unbounded for these codes to come arbitrarily close to the rate-distortion bound.
For information embedding, we introduce the double-erasure information embedding channel model. We develop capacity-achieving codes for the double-erasure channel model. Furthermore, we show that our codes can be efficiently encoded and decoded using belief propagation techniques. We also discuss a generalization of the double-erasure model which shows that the double-erasure model is closely related to other models considered in the literature.
by Venkat Chandar. M.Eng. and S.B.